
    Assume-Guarantee Abstraction Refinement for Probabilistic Systems

    We describe an automated technique for assume-guarantee style checking of strong simulation between a system and a specification, both expressed as non-deterministic Labeled Probabilistic Transition Systems (LPTSes). We first characterize counterexamples to strong simulation as "stochastic" trees and show that simpler structures are insufficient. Then, we use these trees in an abstraction refinement algorithm that computes the assumptions for assume-guarantee reasoning as conservative LPTS abstractions of some of the system components. The abstractions are automatically refined based on tree counterexamples obtained from failed simulation checks with the remaining components. We have implemented the algorithms for counterexample generation and assume-guarantee abstraction refinement, and we report encouraging results.
    Comment: 23 pages, conference paper with full proofs
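
    A minimal control-flow sketch of the refinement loop described above may help. This is only an illustration under assumed helpers (abstract, compose, simulates, replayable, and refine are hypothetical placeholders, not the paper's implementation), using the asymmetric rule in which the assumption is a conservative abstraction of one component:

```python
# Hypothetical sketch of assume-guarantee abstraction refinement.
# Rule: if M1 || A is simulated by Spec and M2 is simulated by A,
# then M1 || M2 is simulated by Spec.  A conservative abstraction of
# M2 discharges the second premise by construction.

def assume_guarantee_ar(m1, m2, spec, abstract, compose, simulates,
                        replayable, refine):
    assumption = abstract(m2)  # coarse initial LPTS abstraction of M2
    while True:
        ok, tree_cex = simulates(compose(m1, assumption), spec)
        if ok:
            return True, None              # premise 1 holds: property proved
        if replayable(m2, tree_cex):
            return False, tree_cex         # genuine violation of the system
        # Spurious tree counterexample: refine A to rule it out and retry.
        assumption = refine(assumption, tree_cex)
```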

    Learning to divide and conquer: applying the L* algorithm to automate assume-guarantee reasoning

    Assume-guarantee reasoning enables a “divide-and-conquer” approach to the verification of large systems that checks system components separately while using assumptions about each component’s environment. Developing appropriate assumptions used to be a difficult and manual process. Over the past five years, we have developed a framework for performing assume-guarantee verification of systems in an incremental and fully automated fashion. The framework uses an off-the-shelf learning algorithm to compute the assumptions. The assumptions are initially approximate and become more precise by means of counterexamples obtained by model checking components separately. The framework supports different assume-guarantee rules, both symmetric and asymmetric. Moreover, we have recently introduced alphabet refinement, which extends the assumption learning process to also infer assumption alphabets. This refinement technique starts with assumption alphabets that are subsets of the minimal interface between a component and its environment, and adds actions to them as necessary until a given property is shown to hold or to be violated in the system. We have applied the learning framework to a number of case studies that show that compositional verification by learning assumptions can be significantly more scalable than non-compositional verification.
    Key words: assume-guarantee reasoning, model checking, labeled transition systems, learning, proof rules, compositional verification, safety properties.
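
    As a hedged sketch of how such a framework can be wired up (all helper names here are assumed stand-ins, not the framework's actual API), the model checker plays the role of the L* teacher, answering membership queries about single traces and equivalence queries about candidate assumptions:

```python
# Hypothetical teacher for L*-based assumption learning under the
# asymmetric rule: <A> M1 <P> and <true> M2 <A> imply <true> M1 || M2 <P>.

class PropertyViolated(Exception):
    """Raised with a real counterexample to M1 || M2 satisfying P."""

def make_teacher(m1, m2, prop, check, compose, project, trace_safe, alphabet):
    def membership(trace):
        # A trace belongs to the assumption iff M1, in an environment
        # exhibiting that trace, cannot violate P.
        return trace_safe(m1, trace, prop)

    def equivalence(candidate):
        ok, cex = check(compose(m1, candidate), prop)   # premise 1
        if not ok:
            return False, project(cex, alphabet)        # A too permissive
        ok, cex = check(m2, candidate)                  # premise 2
        if ok:
            return True, None                           # both premises hold
        if not trace_safe(m1, cex, prop):
            raise PropertyViolated(cex)                 # real system error
        return False, cex                               # A too restrictive

    return membership, equivalence
```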

    A Compositional Minimization Approach for Large Asynchronous Design Verification

    This paper presents a compositional minimization approach with efficient state space reductions for verifying non-trivial asynchronous designs. These reductions yield a reduced model that preserves exactly the observably equivalent behavior of the original model, so no false counterexamples are produced when the reduced model is verified. This approach allows designs that cannot be handled monolithically or with partial-order reduction to be verified without difficulty. The experimental results show that these reductions let the compositional minimization approach scale significantly on a number of large asynchronous designs.
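
    The overall pipeline can be pictured roughly as follows; the helper callables are assumptions for illustration, and the key point is that each minimisation step preserves observably equivalent behavior, so verdicts on the final reduced model transfer to the original design:

```python
# Hypothetical compose-and-minimise pipeline over a list of components.

def compose_and_minimise(components, compose, minimise, hide_internal):
    model = minimise(components[0])
    for comp in components[1:]:
        model = compose(model, minimise(comp))
        # Hide actions that have become internal to the partial
        # composition, then reduce, keeping intermediate spaces small.
        model = minimise(hide_internal(model))
    return model
```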

    To compose, or not to compose, that is the question: an analysis of compositional state space generation

    To combat state space explosion, several compositional verification approaches have been proposed. One such approach is compositional aggregation, where a given system consisting of a number of parallel components is iteratively composed and minimised. Compositional aggregation has been shown to perform better (in the size of the largest state space in memory at one time) than classical monolithic composition in a number of cases. However, there are also cases in which compositional aggregation performs much worse. It is unclear when one should apply compositional aggregation in favor of other techniques, and how it is affected by action hiding and the scale of the model. This paper presents a descriptive analysis following a quantitative experimental approach. The experiments were conducted in a controlled test bed setup in a computer laboratory environment. A total of eight scalable models with different network topologies, varied over a number of properties, were investigated, comprising 119 subjects. This makes it the most comprehensive study done so far on the topic. We investigate whether there is any systematic difference in the success of compositional aggregation based on the model, scaling, and action hiding. Our results indicate that both scaling up the model and hiding more behaviour have a positive influence on compositional aggregation.
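
    To make the cost measure concrete, a rough sketch of compositional aggregation that also tracks the study's metric (the largest state space in memory at one time) might look like this; pick_pair is exactly the heuristic choice whose payoff the paper investigates, and all helpers are assumed:

```python
# Hypothetical compositional aggregation with a memory high-water mark.

def aggregate(components, pick_pair, compose, minimise, hide, size):
    work = [minimise(hide(c)) for c in components]
    peak = max(size(c) for c in components)      # pre-reduction sizes
    while len(work) > 1:
        a, b = pick_pair(work)                   # which pair to compose next?
        work.remove(a)
        work.remove(b)
        product = compose(a, b)
        peak = max(peak, size(product))          # track the largest space
        work.append(minimise(hide(product)))     # aggregate = compose + min
    return work[0], peak
```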

    Towards Trusting Autonomous Systems

    Autonomous systems are rapidly transitioning from labs into our lives. A crucial question concerns trust: in what situations will we (appropriately) trust such systems? This paper proposes three necessary prerequisites for trust. The three prerequisites are defined, motivated, and related to each other. We then consider how to realise the prerequisites. This paper aims to articulate a research agenda, and although it provides suggestions for approaches to take and directions for future work, it contains more questions than answers.